33 research outputs found

    Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects

    Get PDF
    These are the Proceedings of the 2nd IUI Workshop on Interacting with Smart Objects. The objects we use in everyday life are expanding their restricted interaction capabilities and now provide functionality that goes far beyond their original purpose. They feature computing capabilities and are thus able to capture, process, and store information and to interact with their environment, turning them into smart objects.

    Novel Devices and Interaction Concepts for Fluid Collaboration

    Get PDF
    This thesis addresses computer-augmented collaborative work. More precisely, it focuses on co-located collaboration, where co-workers get together at the same place, usually a meeting room. We assume co-workers to use both mobile devices (i.e., hand-held devices) and a static device (i.e., an interactive table). These devices provide multiple output modalities, such as visual and sound output. The co-workers are assumed to process digital content (e.g., documents and videos). According to both common experience and scientific evidence, co-workers often switch between rather individual, self-directed work and tightly shared group work; these working styles are denoted as loose and tight collaboration, respectively. The overarching goal of this thesis is to better support seamless transitions between loose and tight collaboration, denoted as fluid collaboration.

    In order to support such fluid transitions between the two working styles, we have to reflect and mitigate conflicting requirements for both output modalities. In tight collaboration, co-workers appreciate proximity and equal access to content; both workspaces and content are shared. In loose collaboration, co-workers desire sufficient space of their own and minimal interference with their content and interaction. It has been shown that in conventional settings (e.g., interactive tables), a transition between tight and loose collaboration leads to limited personal workspace and thereby to workspace interference, clutter, and other constraints. During collaboration, such interference concerns both visual and sound output. In light of these facts, further research on interactive devices (e.g., interactive tables and mobile devices) is needed to support fluid collaboration with different output modalities. These observations lead to the central research question of this thesis: How can we support fluid co-located collaboration using visual and sound content? This thesis explores this question in three main research directions: (1) surface-based interaction, (2) spatial interaction, and (3) embodied sound interaction; directions (1) and (2) address visual content, while (3) focuses on auditory content. In each direction, we conceptualized, implemented, and evaluated a set of device concepts together with corresponding interaction concepts.

    The first research direction, Surface-Based Interaction, contributes a novel tabletop, called Permulin, that provides (1) a group view that establishes common ground during phases of tight collaboration, (2) private full-screen views for each collaborator to scaffold loosely coupled collaboration, and (3) interaction and visualization techniques for sharing content between these views for coordination and mutual awareness. Results from an exploratory study and from a controlled experiment provide evidence for the following advancements: (1) Permulin supports fluid collaboration by allowing users to transition fluidly between loose and tight collaboration. (2) Users perceive and use Permulin as both a cooperative and an individual device. Among other findings, this is reflected by participants occupying significantly larger interaction areas on Permulin than on a conventional tabletop system. (3) Permulin provides unique awareness properties: participants were highly aware of each other and of their interactions during tightly coupled collaboration, while being able to unobtrusively perform individual work during loosely coupled collaboration.

    In the second research direction, Spatial Interaction, we simulate future paper-like display devices and investigate how well-known collaboration and interaction techniques with paper documents can be transferred to video navigation on such devices. We thereby contribute a device concept and interaction techniques that allow multiple users to collaboratively process collections of videos on multiple paper-like displays. The proposed approach, coined CoPaperVideo, leverages the physical arrangement of the devices. It enables users to navigate video collections, create an overview of multiple videos, and structure and organize video content. Results of two user studies indicate that our spatial interaction concepts allow users to flexibly organize and structure multiple videos in physical space and to transition easily and seamlessly between individual and group work. In addition, the spatial interaction concepts leverage 3D space for interaction and mitigate space limitations.

    The first two research directions contribute novel devices and interaction concepts for visual content. Visual interfaces are particularly suitable for collaboration because they afford direct manipulation of visual content. However, while current devices support both visual and sound output, suitable devices and interaction concepts for collaborative direct manipulation of sound content are still lacking. Hence, the third research direction, Embodied Sound Interaction, explores novel devices and interaction concepts for the direct manipulation of sound during fluid collaboration. First, we contribute interfaces that enable users to control sound individually by means of body-based interaction; the concept focuses on the body part where sound is perceived: the user's own ear. Second, direct manipulation of sound is supported through spatial control of sound sources: virtual sound sources are situated in 3D space and physically associated with spatially aware paper-like displays that embed videos. By physically moving these displays, each user can control, and focus on, multiple sound sources individually or collaboratively. The evaluation supports our hypothesis that our embodied sound interaction concepts provide effective sound support for users during fluid collaboration.
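
    The third direction lends itself to a small illustration. The following Python fragment is our own hedged sketch, not code from the thesis: all names and the rolloff constant are assumptions. It shows how virtual sound sources bound to spatially tracked displays could be mixed per listener, so that bringing a display closer makes its sound source dominate the mix.

```python
# Hedged sketch (not the thesis implementation): each tracked paper-like
# display carries a virtual sound source; per-listener gain falls off with
# distance, so moving a display closer "focuses" its audio.
from dataclasses import dataclass
import math


@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def dist(self, other: "Vec3") -> float:
        return math.dist((self.x, self.y, self.z), (other.x, other.y, other.z))


@dataclass
class DisplaySource:
    display_id: str
    position: Vec3          # updated continuously by the tracking system
    base_gain: float = 1.0


def mix_gains(listener: Vec3, sources: list[DisplaySource],
              rolloff: float = 1.5) -> dict[str, float]:
    """Inverse-distance gain per source, normalized so the mix stays bounded."""
    raw = {s.display_id: s.base_gain / max(listener.dist(s.position), 0.1) ** rolloff
           for s in sources}
    total = sum(raw.values()) or 1.0
    return {k: v / total for k, v in raw.items()}
```

    Normalizing the raw gains keeps the overall loudness stable while preserving the relative emphasis users create by rearranging the displays.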

    Interaction Techniques for Co-located Collaborative TV

    No full text
    We propose a number of interaction techniques that allow TV viewers to use their mobile phones to view and share content with others in the room, thus supporting local social interaction. Based on a preliminary evaluation, we provide guidelines for designing interactions that support co-located collaborative TV viewing.
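
    One way such sharing could be realized, purely as a sketch (the message format and transport here are hypothetical, not from the paper), is a small share request that a viewer's phone sends to the room's shared session:

```python
# Hypothetical message format for sharing content from a phone to the shared
# TV or to another viewer's phone; not the protocol from the paper.
import json


def make_share_message(sender: str, content_url: str, target: str = "tv") -> str:
    """Serialize a share request; 'target' may also name another viewer's phone."""
    return json.dumps({"type": "share", "from": sender,
                       "content": content_url, "target": target})
```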

    Toward Mobile Video Interaction with Rollable Displays

    No full text
    Recent technological advances in creating thin-film, rollable displays indicate their prospective adoption in mobile devices. In this position paper, we argue that such displays have great potential to advance the field of mobile video interaction due to their flexible screen size and rich physical interaction capabilities. To support the discussion, we depict exemplary interaction concepts and outline promising directions to guide future research.
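
    As a concrete illustration of "flexible screen size", a minimal sketch (the thresholds and the extent reading are assumptions, not from the paper) could map how far the display is unrolled to progressively richer video UI modes:

```python
# Hedged sketch: adapt the video UI to the physically unrolled extent of a
# rollable display. The 'unrolled_mm' reading and the thresholds are assumed.
def layout_for_extent(unrolled_mm: float) -> str:
    """Map the unrolled extent to a video UI mode."""
    if unrolled_mm < 80:      # mostly rolled up: bare playback controls
        return "playback-only"
    elif unrolled_mm < 150:   # phone-sized: add a timeline scrubber
        return "timeline"
    else:                     # fully unrolled: filmstrip overview of the video
        return "filmstrip-overview"
```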

    EarPut: augmenting behind-the-ear devices for ear-based interaction

    No full text
    In this work-in-progress paper, we make a case for leveraging the unique affordances of the human ear for eyes-free, mobile interaction. We present EarPut, a novel interface concept that instruments the ear as an interactive surface for touch-based interactions, together with a prototypical hardware implementation. The central idea behind EarPut is to go beyond prior work by unobtrusively augmenting a variety of accessories that are worn behind the ear, such as headsets or glasses. Results from a controlled experiment with 27 participants provide empirical evidence that people are able to target salient regions on their ear effectively and precisely. Moreover, we contribute a first, systematically derived interaction design space for ear-based interaction and a set of exemplary applications.
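
    A minimal dispatch sketch illustrates the idea; the region names and action bindings below are our assumptions, and EarPut's actual interaction design space is considerably richer:

```python
# Illustrative only: bind touches on salient ear regions to eyes-free actions.
EAR_REGIONS = {
    "helix-top": "volume_up",
    "helix-bottom": "volume_down",
    "lobe": "play_pause",
}


def on_ear_touch(region: str) -> str | None:
    """Return the action bound to a touched ear region, if any."""
    return EAR_REGIONS.get(region)
```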

    Permulin: personal in- and output on interactive surfaces

    No full text
    Interactive tables are well suited for co-located collaboration. Most prior research assumed that users share the same overall display output; a key challenge was the appropriate partitioning of screen real estate to assemble the right information "at the users' fingertips" through simultaneous input. Recent multi-view display environments follow a different approach: they offer personal output for each team member, yet risk dissolving the team due to the lack of a common visual focus. Our approach combines both lines of thought, guided by the question: "What if the visible output and simultaneous input were partly shared and partly private?" We present Permulin, a concrete implementation of this approach, based on a set of novel interaction concepts that support fluid transitions between individual and group activities, coordination of group activities, and concurrent, distraction-free in-place manipulation. Study results indicate that users are able to focus on individual work across the whole surface without notable mutual interference, while at the same time establishing a strong sense of collaboration.
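
    The core data model behind "partly shared and partly private" output can be sketched in a few lines; the names below are assumed and this is not Permulin's implementation. Each item is either shared or private to one user, and the multi-view display composites a different image per viewer:

```python
# Hedged sketch of per-viewer compositing on a multi-view tabletop.
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    owner: str | None   # None = shared group content, else private to 'owner'


def visible_to(viewer: str, items: list[Item]) -> list[Item]:
    """Shared items plus the viewer's own private items; others' stay hidden."""
    return [it for it in items if it.owner is None or it.owner == viewer]


def share(item: Item) -> None:
    """Moving an item into the group view makes it visible to everyone."""
    item.owner = None
```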

    Interacting with Videos On Paper-like Displays

    No full text
    Analog paper is still often preferred over electronic documents due to its specific affordances and rich spatial interaction, in particular when multiple pages are laid out and handled simultaneously. We investigated how interaction with video can benefit from paper-like displays that support motion and sound. We present a system that includes novel interaction concepts for both video and audio: spatial techniques for temporal navigation, arranging and grouping videos, virtualizing and materializing content, and focusing on multiple parallel audio sources.
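
    One of the spatial techniques, temporal navigation by physically moving a display, can be sketched as a simple mapping from horizontal displacement to playback position (the constants are illustrative, not taken from the system):

```python
# Hedged sketch: horizontal displacement of a tracked paper-like display
# scrubs the video timeline. 'mm_per_video' is an assumed tuning constant.
def scrub_position(x_mm: float, x_ref_mm: float, duration_s: float,
                   mm_per_video: float = 400.0) -> float:
    """Map displacement from a reference pose to a playback time in seconds."""
    fraction = (x_mm - x_ref_mm) / mm_per_video
    return min(max(fraction, 0.0), 1.0) * duration_s
```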

    Collaboration on Interactive Surfaces with Personal In- and Output

    No full text